7 research outputs found

    Composite trait Mendelian randomization reveals distinct metabolic and lifestyle consequences of differences in body shape

    Obesity is a major risk factor for a wide range of cardiometabolic diseases; however, the impact of specific aspects of body morphology remains poorly understood. We combined the GWAS summary statistics of fourteen anthropometric traits from UK Biobank through principal component analysis to reveal four major independent axes: body size, adiposity, predisposition to abdominal fat deposition, and lean mass. Mendelian randomization analysis showed that although body size and adiposity both contribute to the consequences of BMI, many of their effects are distinct: for example, body size increases the risk of cardiac arrhythmia (b = 0.06, p = 4.2 × 10⁻¹⁷), whereas adiposity instead increases that of ischemic heart disease (b = 0.079, p = 8.2 × 10⁻²¹). The body mass-neutral component predisposing to abdominal fat deposition, likely reflecting a shift from subcutaneous to visceral fat, exhibited health effects that were weaker but specifically linked to lipotoxicity, such as ischemic heart disease (b = 0.067, p = 9.4 × 10⁻¹⁴) and diabetes (b = 0.082, p = 5.9 × 10⁻¹⁹). Combining their independent predicted effects significantly improved the prediction of obesity-related diseases (p < 10⁻¹⁰). The presented decomposition approach sheds light on the biological mechanisms underlying the heterogeneity of body morphology and its consequences on health and lifestyle.
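    The decomposition step can be sketched as follows. This is a minimal illustration in Python with simulated stand-in data, not the authors' actual pipeline: it assumes a matrix of per-variant effect estimates on several traits and extracts the leading principal components, whose per-variant scores could then serve as exposures in a Mendelian randomization analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: per-variant effect estimates (rows) on 14
# anthropometric traits (columns), as from GWAS summary statistics.
n_variants, n_traits = 5000, 14

# Simulate two latent axes (think "body size" and "adiposity") that
# jointly drive the observed trait effects, plus measurement noise.
latent = rng.normal(size=(n_variants, 2))
loadings = rng.normal(size=(2, n_traits))
effects = latent @ loadings + 0.1 * rng.normal(size=(n_variants, n_traits))

# Principal component analysis via SVD of the column-centred matrix.
centred = effects - effects.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Per-variant scores on the top components: near-independent axes of
# variation that could feed downstream Mendelian randomization.
scores = centred @ Vt[:2].T
print(f"variance explained by first two PCs: {explained[:2].sum():.2f}")
```

    Because the simulation plants exactly two latent axes, the first two principal components capture almost all of the variance; with real summary statistics the number of meaningful axes must be judged from the explained-variance spectrum.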

    Meta-analysis of (single-cell method) benchmarks reveals the need for extensibility and interoperability

    Computational methods represent the lifeblood of modern molecular biology. Benchmarking is important for all methods; for the computational methods in focus here, it is critical to dissect important steps of analysis pipelines, formally assess performance across common situations as well as edge cases, and ultimately guide users on which tools to use. Benchmarking can also be important for community building and advancing methods in a principled way. We conducted a meta-analysis of recent single-cell benchmarks to summarize their scope, extensibility, and neutrality, as well as their technical features and whether best practices in open data and reproducible research were followed. The results highlight that while benchmarks often make code available and are in principle reproducible, they remain difficult to extend, for example as new methods and new ways to assess methods emerge. In addition, embracing containerization and workflow systems would enhance the reusability of intermediate benchmarking results, thus also driving wider adoption.

    Synthesizing plausible futures for biodiversity and ecosystem services in Europe and Central Asia using scenario archetypes

    Scenarios are a useful tool to explore possible futures of social-ecological systems. The number of scenarios has increased dramatically over recent decades, with a large diversity in temporal and spatial scales, purposes, themes, development methods, and content. Scenario archetypes generically describe future developments and can be useful in meaningfully classifying scenarios, structuring and summarizing the overwhelming amount of information, and enabling scientific outputs to more effectively interface with decision-making frameworks. The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) faced this challenge and used scenario archetypes in its assessment of future interactions between nature and society. We describe the use of scenario archetypes in the IPBES Regional Assessment of Europe and Central Asia. Six scenario archetypes for the region are described in terms of their driver assumptions and impacts on nature (including biodiversity) and its contributions to people (including ecosystem services): business-as-usual, economic optimism, regional competition, regional sustainability, global sustainable development, and inequality. The analysis shows that trade-offs between nature's contributions to people are projected under different scenario archetypes. However, the means of resolving these trade-offs depend on differing political and societal value judgements within each scenario archetype. Scenarios that include proactive decision making on environmental issues, environmental management approaches that support multifunctionality, and mainstreaming of environmental issues across sectors are generally more successful in mitigating trade-offs than isolated environmental policies. Furthermore, those scenario archetypes that focus on achieving a balanced supply of nature's contributions to people and that incorporate a diversity of values are estimated to achieve more policy goals and targets, such as the UN Sustainable Development Goals and the Convention on Biological Diversity Aichi targets. The scenario archetypes approach is shown to be helpful in supporting science-policy dialogue for proactive decision making that anticipates change, mitigates undesirable trade-offs, and fosters societal transformation in pursuit of sustainable development.

    pipeComp, a general framework for the evaluation of computational pipelines, reveals performant single cell RNA-seq preprocessing tools

    We present pipeComp (https://github.com/plger/pipeComp), a flexible R framework for pipeline comparison that handles interactions between analysis steps and relies on multi-level evaluation metrics. We apply it to the benchmarking of single-cell RNA-sequencing analysis pipelines, using simulated and real datasets with known cell identities and covering common methods of filtering, doublet detection, normalization, feature selection, denoising, dimensionality reduction, and clustering. pipeComp can easily integrate any other step, tool, or evaluation metric, allowing extensible benchmarks and easy application to other fields, as we demonstrate through a study of the impact of removing unwanted variation on differential expression analysis.
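    The core idea of evaluating step combinations rather than one step at a time can be sketched in a few lines. This is a hedged, toy illustration in plain Python, not the pipeComp R API: the step names, candidate methods, and data are all invented for the example, and the "metric" is simply absolute error against a known truth.

```python
import itertools

# Toy data: noisy measurements with a known true mean and one outlier.
data = [9.0, 10.0, 11.0, 10.5, 50.0]
truth = 10.0

# Two pipeline steps, each with alternative methods.
steps = {
    "filter": {
        "none": lambda xs: xs,
        "drop_outliers": lambda xs: [x for x in xs if x < 30],
    },
    "summarise": {
        "mean": lambda xs: sum(xs) / len(xs),
        "median": lambda xs: sorted(xs)[len(xs) // 2],
    },
}

# Run every combination of methods, so interactions between steps
# (e.g. filtering changing which summary works best) are captured.
results = {}
for combo in itertools.product(*(steps[s].items() for s in steps)):
    names = tuple(name for name, _ in combo)
    value = data
    for _, fn in combo:
        value = fn(value)
    results[names] = abs(value - truth)  # evaluation metric

best = min(results, key=results.get)
print(best, results[best])
```

    Here the full grid reveals that outlier filtering followed by the mean beats every other combination, a conclusion that varying one step in isolation could miss.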

    Besca, a single-cell transcriptomics analysis toolkit to accelerate translational research

    Single-cell RNA sequencing (scRNA-seq) has revolutionized our understanding of disease biology. The promise it holds to also transform translational research requires highly standardized and robust software workflows. Here, we present the toolkit Besca, which streamlines scRNA-seq analyses and their use to deconvolute bulk RNA-seq data according to current best practices. Beyond a standard workflow covering quality control, filtering, and clustering, two complementary Besca modules, utilizing hierarchical cell signatures and supervised machine learning, automate cell annotation and provide harmonized nomenclatures. Subsequently, the gene expression profiles can be employed to estimate cell type proportions in bulk transcriptomics data. Using multiple, diverse scRNA-seq datasets, some stemming from highly heterogeneous tumor tissue, we show how Besca aids the acceleration, interoperability, reusability, and interpretability of scRNA-seq data analyses, meeting crucial demands in translational research and beyond.
    ISSN: 2631-926
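    The deconvolution idea the abstract describes, estimating cell-type proportions in bulk data from scRNA-seq-derived signatures, can be illustrated with a minimal sketch. This is not Besca's actual API; the signature matrix, gene counts, and the simple least-squares-plus-clipping solver are all assumptions made for the example.

```python
import numpy as np

# Hypothetical signature matrix: mean expression of 6 genes (rows)
# in 3 cell types (columns), as would be derived from scRNA-seq.
signatures = np.array([
    [10.0, 0.0, 0.0],
    [8.0,  1.0, 0.0],
    [0.0,  9.0, 1.0],
    [0.0,  7.0, 0.0],
    [1.0,  0.0, 6.0],
    [0.0,  0.0, 8.0],
])

# Simulated bulk sample: 50% type A, 30% type B, 20% type C.
true_props = np.array([0.5, 0.3, 0.2])
bulk = signatures @ true_props

# Ordinary least squares, then clip to non-negative and renormalise --
# a simple stand-in for the constrained solvers real tools use.
est, *_ = np.linalg.lstsq(signatures, bulk, rcond=None)
est = np.clip(est, 0, None)
est /= est.sum()
print(np.round(est, 3))
```

    With noiseless synthetic data the estimate recovers the planted proportions exactly; on real bulk profiles, noise and signature mismatch make the choice of solver and normalisation much more consequential.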
